PyMC is a Python module that implements Bayesian statistical models and fitting algorithms, including Markov chain Monte Carlo. Its flexibility and extensibility make it applicable to a large suite of problems. Along with core sampling functionality, PyMC includes methods for summarizing output, plotting, goodness-of-fit and convergence diagnostics.
PyMC provides functionality to make Bayesian analysis as painless as possible, as the following examples illustrate.
In [ ]:
%matplotlib inline
import seaborn as sns; sns.set_context('notebook')
from pymc import Normal, Lambda, observed, MCMC, Matplot, Uniform
from pymc.examples import melanoma_data as data
import numpy as np
# Convert censoring indicators to indicators for failure event
failure = (data.censored==0).astype(int)
# Intercept for survival rate
beta0 = Normal('beta0', mu=0.0, tau=0.0001, value=0.0)
# Treatment effect
beta1 = Normal('beta1', mu=0.0, tau=0.0001, value=0.0)
# Survival rates
lam = Lambda('lam', lambda b0=beta0, b1=beta1, tr=data.treat: np.exp(b0 + b1*tr))
@observed
def survival(value=data.t, lam=lam, f=failure):
    """Exponential survival likelihood, accounting for censoring"""
    return sum(f*np.log(lam) - lam*value)
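The value returned by `survival` is the log-likelihood of an exponential survival model with right censoring: an uncensored observation ($f_i=1$) contributes the log of the exponential density at its failure time, while a censored observation ($f_i=0$) contributes only the log of the survival function. With $\lambda_i$ the subject's rate and $t_i$ the observed time, this is
$$\log L = \sum_i \left( f_i \log \lambda_i - \lambda_i t_i \right)$$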
This example will generate 10,000 posterior samples, with the first half discarded as burn-in. The samples are stored in a Python serialization (pickle) database.
In [ ]:
M = MCMC([beta0, beta1, lam, survival], db='pickle')
M.sample(iter=10000, burn=5000)
In [ ]:
Matplot.plot(beta1)
In [ ]:
from pymc.examples.disaster_model import disasters_array
import matplotlib.pyplot as plt
n_count_data = len(disasters_array)
plt.figure(figsize=(12.5, 3.5))
plt.bar(np.arange(1851, 1962), disasters_array, color="#348ABD")
plt.xlabel("Year")
plt.ylabel("Disasters")
plt.title("UK coal mining disasters, 1851-1962")
plt.xlim(1851, 1962);
We represent our conceptual model formally as a statistical model:
$$\begin{array}{ccc} (y_t | \tau, \lambda_1, \lambda_2) \sim\text{Poisson}\left(r_t\right), & r_t=\left\{ \begin{array}{lll} \lambda_1 &\text{if}& t< \tau\\ \lambda_2 &\text{if}& t\ge \tau \end{array}\right.,&t\in[t_l,t_h]\\ \tau \sim \text{DiscreteUniform}(t_l, t_h)\\ \lambda_1\sim \text{Exponential}(a)\\ \lambda_2\sim \text{Exponential}(b) \end{array}$$
Because we have defined $y$ by its dependence on $\tau$, $\lambda_1$ and $\lambda_2$, the latter three are known as the parents of $y$, and $y$ is called their child. Similarly, the parents of $\tau$ are $t_l$ and $t_h$, and $\tau$ is the child of $t_l$ and $t_h$.
At the model-specification stage (before the data are observed), $y$,
$\tau$, $\lambda_1$, and $\lambda_2$ are all random variables. Bayesian "random"
variables have not necessarily arisen from a physical random process.
The Bayesian interpretation of probability is epistemic, meaning
random variable $x$'s probability distribution $p(x)$ represents our
knowledge and uncertainty about $x$'s value. Candidate
values of $x$ for which $p(x)$ is high are relatively more probable,
given what we know. Random variables are represented in PyMC by the classes `Stochastic` and `Deterministic`.
The only `Deterministic` in the model is $r$. If we knew the values of $r$'s parents, we could compute the value of $r$ exactly. A `Deterministic` like $r$ is defined by a mathematical function that returns its value given values for its parents. `Deterministic` variables are sometimes called the systemic part of the model. The nomenclature is a bit confusing, because these objects usually represent random variables; since the parents of $r$ are random, $r$ is random also.
On the other hand, even if the values of the parents of the variables `switchpoint`, `disasters` (before observing the data), `early_mean` or `late_mean` were known, we would still be uncertain of their values. These variables are characterized by probability distributions that express how plausible their candidate values are, given values for their parents. The `Stochastic` class represents these variables.
First, we represent the unknown switchpoint as a discrete uniform random variable:
In [ ]:
from pymc import DiscreteUniform, Exponential, Poisson, deterministic
switchpoint = DiscreteUniform('switchpoint', lower=0, upper=110, value=50)
`DiscreteUniform` is a subclass of `Stochastic` that represents uniformly-distributed discrete variables. Use of this distribution suggests that we have no preference a priori regarding the location of the switchpoint; all values are equally likely.
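As a quick check, exponentiating `switchpoint`'s log-probability should give roughly $1/111 \approx 0.009$, since each of the 111 candidate years is equally probable a priori:
In [ ]:
np.exp(switchpoint.logp)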
Now we create the exponentially-distributed variables `early_mean` and `late_mean` for the early and late Poisson rates, respectively:
In [ ]:
early_mean = Exponential('early_mean', beta=1., value=0.1)
late_mean = Exponential('late_mean', beta=1., value=1)
Next, we define the variable `rate`, which selects the early rate `early_mean` for times before `switchpoint` and the late rate `late_mean` for times after `switchpoint`. We create `rate` using the `deterministic` decorator, which converts the ordinary Python function `rate` into a `Deterministic` object.
In [ ]:
@deterministic(plot=False)
def rate(s=switchpoint, e=early_mean, l=late_mean):
    ''' Concatenate Poisson means '''
    out = np.empty(len(disasters_array))
    out[:s] = e
    out[s:] = l
    return out
The last step is to define the number of disasters, `disasters`. This is a stochastic variable, but unlike `switchpoint`, `early_mean` and `late_mean`, we have observed its value. To express this, we set the argument `observed` to `True` (it is set to `False` by default). This tells PyMC that this object's value should not be changed:
In [ ]:
disasters = Poisson('disasters', mu=rate, value=disasters_array,
observed=True)
In [ ]:
disasters.logp
Since it is represented by a `Stochastic` object, `disasters` is defined by its dependence on its parent `rate`, even though its value is fixed. This isn't just a quirk of PyMC's syntax; Bayesian hierarchical notation itself makes no distinction between random variables and data. The reason is simple: to use Bayes' theorem to compute the posterior, we require the likelihood. Even though `disasters`'s value is known and fixed, we need to formally assign it a probability distribution as if it were a random variable. Remember, the likelihood and the probability function are essentially the same, except that the former is regarded as a function of the parameters and the latter as a function of the data.
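Concretely, for the disasters model above the likelihood is the product of Poisson terms evaluated at the observed counts, regarded as a function of the parameters:
$$L(\tau, \lambda_1, \lambda_2 \mid y) = \prod_{t=t_l}^{t_h} \frac{r_t^{y_t} e^{-r_t}}{y_t!}, \qquad r_t=\begin{cases}\lambda_1 & t<\tau\\ \lambda_2 & t\ge\tau\end{cases}$$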
This point can be counterintuitive at first, as many people's instinct is to regard data as fixed a priori and unknown variables as dependent on the data. One way to understand this is to think of statistical models as predictive models for data, or as models of the processes that gave rise to data. Before observing the value of `disasters`, we could have sampled from its prior predictive distribution $p(y)$ (i.e. the marginal distribution of the data) as follows:
- Sample `early_mean`, `switchpoint` and `late_mean` from their priors.
- Sample `disasters` conditional on these values.
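Here is a minimal NumPy sketch of those two steps; it bypasses PyMC and draws directly from the prior distributions defined above, and the variable names are illustrative only:
In [ ]:
# Prior predictive sampling sketch: draw the parents from their priors,
# then draw one synthetic disasters series conditional on those draws.
n_years = len(disasters_array)
switch_draw = np.random.randint(0, n_years)     # DiscreteUniform(0, 110)
early_draw = np.random.exponential(scale=1.)    # Exponential(beta=1), i.e. rate 1
late_draw = np.random.exponential(scale=1.)
rate_draw = np.where(np.arange(n_years) < switch_draw, early_draw, late_draw)
prior_predictive_draw = np.random.poisson(rate_draw)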
Even after we observe the value of `disasters`, we need to use this process model to make inferences about `early_mean`, `switchpoint` and `late_mean`, because it is the only information we have about how the variables are related.
In [ ]:
switchpoint.parents
The `parents` dictionary shows us the distributional parameters of `switchpoint`, which are constants. Now let's examine `disasters`'s parents:
In [ ]:
disasters.parents
In [ ]:
rate.parents
We are using `rate` as a distributional parameter of `disasters` (i.e. `rate` is `disasters`'s parent). `disasters` internally labels `rate` as `mu`, meaning `rate` plays the role of the rate parameter in `disasters`'s Poisson distribution. Now examine `rate`'s `children` attribute:
In [ ]:
rate.children
Because `disasters` considers `rate` its parent, `rate` considers `disasters` its child. Unlike `parents`, `children` is a set (an unordered collection of objects); variables do not associate their children with any particular distributional role. Try examining the `parents` and `children` attributes of the other parameters in the model.
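For example, because `rate` is computed from `early_mean`, the `children` set of `early_mean` should contain `rate`:
In [ ]:
early_mean.children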
The following directed acyclic graph is a visualization of the parent-child relationships in the model. Unobserved stochastic variables `switchpoint`, `early_mean` and `late_mean` are open ellipses, the observed stochastic variable `disasters` is a filled ellipse, and the deterministic variable `rate` is a triangle. Arrows point from parent to child and display the label that the child assigns to the parent.
As the examples above have shown, PyMC objects need to have a name assigned, such as `switchpoint`, `early_mean` or `late_mean`. These names are used for storage and post-processing:
A model instantiated with variables having identical names raises an error, to avoid name conflicts in the database storing the traces. In general, however, PyMC uses references to the objects themselves, not their names, to identify variables.
In [ ]:
disasters.value
If you check the values of `early_mean`, `switchpoint` and `late_mean`, you'll see random initial values generated by PyMC:
In [ ]:
switchpoint.value
In [ ]:
early_mean.value
In [ ]:
late_mean.value
Of course, since these are `Stochastic` elements, your values will be different from these. If you check `rate`'s value, you'll see an array whose first `switchpoint` elements are `early_mean`, and whose remaining elements are `late_mean`:
In [ ]:
rate.value
To compute its value, `rate` calls the function we used to create it, passing in the values of its parents.
`Stochastic` objects can evaluate their probability mass or density functions at their current values, given the values of their parents. The logarithm of a stochastic object's probability mass or density can be accessed via the `logp` attribute. For vector-valued variables like `disasters`, the `logp` attribute returns the sum of the logarithms of the joint probability or density of all elements of the value. Try examining `switchpoint`'s and `disasters`'s log-probabilities and `early_mean`'s and `late_mean`'s log-densities:
In [ ]:
switchpoint.logp
In [ ]:
disasters.logp
In [ ]:
early_mean.logp
In [ ]:
late_mean.logp
`Stochastic` objects need to call an internal function to compute their `logp` attributes, just as `rate` needed to call an internal function to compute its value. Just as we created `rate` by decorating a function that computes its value, it's possible to create custom `Stochastic` objects by decorating functions that compute their log-probabilities or densities. Users are thus not limited to the set of statistical distributions provided by PyMC.
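For instance, here is a sketch of how a discrete-uniform switchpoint could be written by hand with the `stochastic` decorator, supplying only a function that returns the log-probability. This simply duplicates what `DiscreteUniform` already provides, and the name is illustrative:
In [ ]:
from pymc import stochastic

@stochastic(dtype=int)
def hand_rolled_switchpoint(value=50, t_l=0, t_h=110):
    """Discrete-uniform switchpoint defined directly by its log-probability."""
    if value < t_l or value > t_h:
        # Zero probability outside the support
        return -np.inf
    # Each of the t_h - t_l + 1 admissible values is equally likely
    return -np.log(t_h - t_l + 1)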
Let's take a closer look at our definition of `rate`:
@deterministic(plot=False)
def rate(s=switchpoint, e=early_mean, l=late_mean):
    ''' Concatenate Poisson means '''
    out = np.empty(len(disasters_array))
    out[:s] = e
    out[s:] = l
    return out
The arguments `switchpoint`, `early_mean` and `late_mean` are `Stochastic` objects, not numbers. If that is so, why aren't errors raised when we attempt to slice array `out` up to a `Stochastic` object?
Whenever a variable is used as a parent of a child variable, PyMC replaces it with its `value` attribute when the child's value or log-probability is computed. When `rate`'s value is recomputed, the current value of `switchpoint` (that is, `switchpoint.value`) is passed to the function as the argument `s`. To see the values of the parents of `rate` all together, look at `rate.parents.value`.
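For example, this returns a dictionary mapping each parent's label to its current value, which is what gets passed into the function:
In [ ]:
rate.parents.value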
In [ ]:
from pymc.examples import disaster_model
M = MCMC(disaster_model)
In [ ]:
M
In [ ]:
M.late_mean
In this case `M` will expose the variables `switchpoint`, `early_mean`, `late_mean` and `disasters` as attributes; that is, `M.switchpoint` will be the same object as `disaster_model.switchpoint`.
To run the sampler, call the MCMC object's `sample()` method with arguments for the number of iterations, burn-in length, and thinning interval (if desired):
In [ ]:
M.sample(iter=10000, burn=1000)
In [ ]:
M.trace('early_mean')[1000:]
The trace slice `[start:stop:step]` works just like a NumPy array slice. By default, the returned trace array contains the samples from the last call to `sample`, that is, `chain=-1`, but the trace from previous sampling runs can be retrieved by specifying the corresponding chain index. To return the trace from all chains, simply use `chain=None`.
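For example, to retrieve the `early_mean` samples from every chain stored so far (this assumes `sample` has been called at least once):
In [ ]:
M.trace('early_mean', chain=None)[:]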
You can examine the marginal posterior of any variable by plotting a histogram of its trace:
In [ ]:
plt.hist(M.trace('late_mean')[:])
PyMC has its own plotting functionality, via the optional matplotlib module, as noted in the installation notes. The `Matplot` module includes a `plot` function that takes the model (or a single parameter) as an argument:
In [ ]:
Matplot.plot(M)
The upper left-hand pane of each figure shows the temporal series of the samples from each parameter, while below is an autocorrelation plot of the samples. The right-hand pane shows a histogram of the trace. The trace is useful for evaluating and diagnosing the algorithm's performance, while the histogram is useful for visualizing the posterior.
For a non-graphical summary of the posterior, simply call a variable's `summary` method (the MCMC object's `stats` method returns the same statistics for all variables as a dictionary):
In [ ]:
M.early_mean.summary()
As with most textbook examples, the models we have examined so far assume that the associated data are complete. That is, there are no missing values corresponding to any observations in the dataset. However, many real-world datasets have missing observations, usually due to some logistical problem during the data collection process. The easiest way of dealing with observations that contain missing values is simply to exclude them from the analysis. However, this results in loss of information if an excluded observation contains valid values for other quantities, and can bias results. An alternative is to impute the missing values, based on information in the rest of the model.
For example, consider a survey dataset for some wildlife species:
| Count | Site | Observer | Temperature |
|-------|------|----------|-------------|
| 15    | 1    | 1        | 15          |
| 10    | 1    | 2        | NA          |
| 6     | 1    | 1        | 11          |
Each row contains the number of individuals seen during the survey, along with three covariates: the site on which the survey was conducted, the observer that collected the data, and the temperature during the survey. If we are interested in modelling, say, population size as a function of the count and the associated covariates, it is difficult to accommodate the second observation because the temperature is missing (perhaps the thermometer was broken that day). Ignoring this observation will allow us to fit the model, but it wastes information that is contained in the other covariates.
In a Bayesian modelling framework, missing data are accommodated simply by treating them as unknown model parameters. Values for the missing data $\tilde{y}$ are estimated naturally, using the posterior predictive distribution:
$$p(\tilde{y}|y) = \int p(\tilde{y}|\theta) f(\theta|y) d\theta$$
This describes additional data $\tilde{y}$, which may either be considered unobserved data or potential future observations. We can use the posterior predictive distribution to model the likely values of missing data.
Consider the coal mining disasters data introduced previously. Assume that two years of data are missing from the time series; we indicate this in the data array by the use of an arbitrary placeholder value, `None`:
In [ ]:
x = np.array([ 4, 5, 4, 0, 1, 4, 3, 4, 0, 6, 3, 3, 4, 0, 2, 6,
3, 3, 5, 4, 5, 3, 1, 4, 4, 1, 5, 5, 3, 4, 2, 5,
2, 2, 3, 4, 2, 1, 3, None, 2, 1, 1, 1, 1, 3, 0, 0,
1, 0, 1, 1, 0, 0, 3, 1, 0, 3, 2, 2, 0, 1, 1, 1,
0, 1, 0, 1, 0, 0, 0, 2, 1, 0, 0, 0, 1, 1, 0, 2,
3, 3, 1, None, 2, 1, 1, 1, 1, 2, 4, 2, 0, 0, 1, 4,
0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1])
To estimate these values in PyMC, we generate a masked array. These are specialised NumPy arrays that carry a True or False entry for each element, indicating whether that value should be excluded from any computation. Here a masked array can be generated using NumPy's `ma.masked_values` function:
In [ ]:
masked_values = np.ma.masked_values(x, value=None)
masked_values
This masked array, in turn, can then be passed to one of PyMC's data stochastic variables, which recognizes the masked array and replaces the missing values with Stochastic variables of the desired type. For the coal mining disasters problem, recall that disaster events were modeled as Poisson variates:
In [ ]:
disasters = Poisson('disasters', mu=rate,
value=masked_values, observed=True)
Here `rate` is an array of means for each year of data, allocated according to the location of the switchpoint. Each element in `disasters` is a Poisson Stochastic, irrespective of whether the observation was missing or not. The difference is that actual observations are data Stochastics (`observed=True`), while the missing values are non-data Stochastics. The latter are considered unknown, rather than fixed, and are therefore estimated by the MCMC algorithm, just like unknown model parameters.
The entire model looks very similar to the original model:
In [ ]:
def missing_data_model():
    # Switchpoint
    switch = DiscreteUniform('switch', lower=0, upper=110)
    # Early mean
    early_mean = Exponential('early_mean', beta=1)
    # Late mean
    late_mean = Exponential('late_mean', beta=1)

    @deterministic(plot=False)
    def rate(s=switch, e=early_mean, l=late_mean):
        """Allocate appropriate mean to time series"""
        out = np.empty(len(disasters_array))
        # Early mean prior to switchpoint
        out[:s] = e
        # Late mean following switchpoint
        out[s:] = l
        return out

    masked_values = np.ma.masked_values(x, value=None)

    # Pass masked array to data stochastic, and it does the right thing
    disasters = Poisson('disasters', mu=rate, value=masked_values, observed=True)

    return locals()
Here we have again used the `masked_values` function with `None` as the placeholder for missing data; a numeric placeholder such as -999 together with `np.ma.masked_equal` would give the same result.
In [ ]:
M_missing = MCMC(missing_data_model())
M_missing.sample(5000)
In [ ]:
M_missing.stochastics
In [ ]:
Matplot.summary_plot(M_missing.disasters)
MCMC objects handle individual variables via step methods, which determine how parameters are updated at each step of the MCMC algorithm. By default, step methods are automatically assigned to variables by PyMC. To see which step methods $M$ is using for each parameter, look at its `step_method_dict` attribute:
In [ ]:
M.step_method_dict
The value of `step_method_dict` corresponding to a particular variable is a list of the step methods $M$ is using to handle that variable.
You can force $M$ to use a particular step method by calling `M.use_step_method` before telling it to sample. The following call will cause $M$ to handle `late_mean` with a standard `Metropolis` step method, but with proposal standard deviation equal to $2$:
In [ ]:
from pymc import Metropolis
M.use_step_method(Metropolis, disaster_model.late_mean, proposal_sd=2.)
Another step method class, `AdaptiveMetropolis`, is better at handling highly-correlated variables. If your model mixes poorly, using `AdaptiveMetropolis` is a sensible first thing to try.
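For example, here is a sketch of assigning `AdaptiveMetropolis` jointly to the two rate parameters; whether this actually improves mixing depends on how correlated their posteriors are:
In [ ]:
from pymc import AdaptiveMetropolis
M.use_step_method(AdaptiveMetropolis, [disaster_model.early_mean, disaster_model.late_mean])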